The deployment flexibility and maneuverability of Unmanned Aerial Vehicles (UAVs) have increased their adoption in various applications, such as wildfire tracking and border monitoring. In many critical applications, UAVs capture images and other sensory data and then send them to remote servers for inference and data-processing tasks. However, this approach is not always practical in real-time applications due to connection instability, limited bandwidth, and end-to-end latency. One promising solution is to divide each inference request into multiple parts (layers or segments), with each part executed on a different UAV based on the available resources. Furthermore, some applications require the UAVs to traverse certain areas and capture incidents; thus, planning their paths becomes critical, particularly to reduce the latency of the collaborative inference process. Specifically, planning the UAVs' trajectories can reduce the data-transmission latency by communicating with devices in the same proximity while mitigating transmission interference. This work designs a model for distributed collaborative inference and path planning in a UAV swarm while respecting the resource constraints imposed by the computational load and memory usage of the inference requests. The model is formulated as an optimization problem that minimizes latency. The formulated problem is NP-hard, so finding the optimal solution is quite complex; thus, this paper introduces a real-time, dynamic solution for online applications using deep reinforcement learning. We conduct extensive simulations and compare our results to state-of-the-art studies, demonstrating that our model outperforms the competing models.
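The abstract does not give the objective's exact form, but the latency term it describes can be illustrated with a minimal sketch: an inference request split layer-by-layer across UAVs pays compute time on each assigned UAV, plus a transmission delay whenever two consecutive layers are placed on different UAVs. All names, units, and numbers below are illustrative assumptions, not the paper's formulation.

```python
def collaborative_latency(layer_flops, assignment, uav_speed, link_delay):
    """Illustrative latency of a layer-partitioned inference request.

    layer_flops: compute cost of each layer
    assignment:  UAV index executing each layer
    uav_speed:   per-UAV processing rate (FLOPs per unit time)
    link_delay:  fixed transmission delay for a hand-off between UAVs
    """
    total = 0.0
    for i, flops in enumerate(layer_flops):
        total += flops / uav_speed[assignment[i]]  # compute time of layer i
        if i > 0 and assignment[i] != assignment[i - 1]:
            total += link_delay  # intermediate activations cross a wireless link
    return total

# Three layers; the last layer is offloaded to a slower UAV:
latency = collaborative_latency([10.0, 10.0, 10.0], [0, 0, 1],
                                uav_speed={0: 10.0, 1: 5.0}, link_delay=2.0)
```

A planner (here, the paper's deep-reinforcement-learning agent) would search over `assignment` and the UAV trajectories to minimize this kind of objective under memory and compute constraints.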
Object detectors are conventionally trained with a weighted sum of classification and localization losses. Recent studies (e.g., predicting IoU with an auxiliary head, Generalized Focal Loss, Rank & Sort Loss) have shown that forcing these two loss terms to interact in non-conventional ways creates a useful inductive bias and improves performance. Inspired by these works, we focus on the correlation between classification and localization and make two main contributions: (i) We provide an analysis of the effects of correlation between the classification and localization tasks in object detectors. We identify why correlation affects the performance of various NMS-based and NMS-free detectors, devise measures to evaluate the effect of correlation, and use them to analyze common detectors. (ii) Motivated by our observations, e.g., that NMS-free detectors can also benefit from correlation, we propose Correlation Loss, a novel plug-in loss function that improves the performance of various object detectors by directly optimizing correlation coefficients: e.g., Correlation Loss on Sparse R-CNN, an NMS-free method, yields a 1.6 AP gain on COCO and a 1.8 AP gain on the Cityscapes dataset. Our best Sparse R-CNN model reaches 51.0 AP without test-time augmentation on COCO test-dev, reaching state-of-the-art performance. Code is available at https://github.com/fehmikahraman/CorrLoss
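The abstract does not spell out which correlation coefficient is optimized or how, but the core idea — directly rewarding agreement between per-box classification scores and localization quality — can be sketched with a Pearson correlation, turned into a loss. This is a hypothetical illustration, not the paper's exact formulation.

```python
import math

def correlation_loss(cls_scores, ious, eps=1e-7):
    """Hypothetical sketch: 1 - Pearson correlation between per-box
    classification scores and localization qualities (IoUs). The loss is
    0 when the two rankings agree perfectly and grows as they diverge."""
    n = len(cls_scores)
    ms = sum(cls_scores) / n
    mq = sum(ious) / n
    s = [x - ms for x in cls_scores]          # centered scores
    q = [x - mq for x in ious]                # centered IoUs
    dot = sum(a * b for a, b in zip(s, q))
    norm = math.sqrt(sum(a * a for a in s)) * math.sqrt(sum(b * b for b in q))
    return 1.0 - dot / (norm + eps)

# Scores roughly track IoUs, so the loss is small:
loss = correlation_loss([0.9, 0.7, 0.3, 0.1], [0.85, 0.6, 0.4, 0.05])
```

In a real detector this term would be added to (not replace) the usual classification and localization losses, and a rank correlation such as Spearman's could be substituted for Pearson's.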
Person re-identification is a challenging task because of the high intra-class variance induced by the unrestricted nuisance factors of variations such as pose, illumination, viewpoint, background, and sensor noise. Recent approaches postulate that powerful architectures have the capacity to learn feature representations invariant to nuisance factors, by training them with losses that minimize intra-class variance and maximize inter-class separation, without modeling nuisance factors explicitly. The dominant approaches use either a discriminative loss with margin, like the softmax loss with the additive angular margin, or a metric learning loss, like the triplet loss with batch hard mining of triplets. Since the softmax imposes feature normalization, it limits the gradient flow supervising the feature embedding. We address this by joining the losses and leveraging the triplet loss as a proxy for the missing gradients. We further improve invariance to nuisance factors by adding the discriminative task of predicting attributes. Our extensive evaluation highlights that when only a holistic representation is learned, we consistently outperform the state-of-the-art on the three most challenging datasets. Such representations are easier to deploy in practical systems. Finally, we found that joining the losses removes the requirement for having a margin in the softmax loss while increasing performance.
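The triplet loss with batch-hard mining mentioned above can be sketched as follows: for each anchor in the batch, pick its farthest same-identity sample and its closest different-identity sample, and penalize the margin violation. The distance metric and margin value here are illustrative assumptions.

```python
import math

def batch_hard_triplet_loss(embeddings, labels, margin=0.3):
    """Sketch of batch-hard triplet mining: for each anchor, use the
    hardest positive (largest same-class distance) and hardest negative
    (smallest different-class distance) within the batch."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    losses = []
    for i, (e, y) in enumerate(zip(embeddings, labels)):
        hardest_pos = max(dist(e, f)
                          for j, (f, z) in enumerate(zip(embeddings, labels))
                          if z == y and j != i)
        hardest_neg = min(dist(e, f)
                          for f, z in zip(embeddings, labels) if z != y)
        losses.append(max(0.0, hardest_pos - hardest_neg + margin))
    return sum(losses) / len(losses)

# Two well-separated identity clusters: no triplet violates the margin.
loss = batch_hard_triplet_loss([[0, 0], [0.1, 0], [5, 5], [5.1, 5]],
                               [0, 0, 1, 1])
```

In the joint scheme the abstract describes, this term supplies gradients directly to the embedding alongside the softmax-with-margin loss, rather than replacing it.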
Generalist models, which can perform diverse multi-modal tasks in a task-agnostic way within a single model, have been explored recently. Although a hopeful path toward general-purpose AI, existing generalist models are still at an early stage, with limited modality and task coverage. To empower multi-modal task scaling and speed up this line of research, we release a generalist model learning system, OFASys, built on top of a declarative task interface named multi-modal instruction. At the core of OFASys is the idea of decoupling multi-modal task representations from the underlying model implementations. In OFASys, a task involving multiple modalities can be defined declaratively with even a single line of code. The system automatically generates task plans from such instructions for training and inference, and it also facilitates multi-task training for diverse multi-modal workloads. As a starting point, we provide presets of 7 different modalities and 23 highly diverse example tasks in OFASys, with which we also develop a first-of-its-kind single model, OFA+, that can handle text, image, speech, video, and motion data. The single OFA+ model achieves, on average, 95% of the performance of 15 task-finetuned models with only 16% of their parameters, showcasing the performance reliability of the multi-modal task scaling provided by OFASys. Available at https://github.com/OFA-Sys/OFASys
Synthesizing high-fidelity videos from real-world multi-view input is challenging because of the complexity of real-world environments and highly dynamic motions. Previous works based on neural radiance fields have demonstrated high-quality reconstructions of dynamic scenes. However, training such models on real-world scenes is time-consuming, usually taking days or weeks. In this paper, we present a novel method named MixVoxels that better represents dynamic scenes, with fast training speed and competitive rendering quality. The proposed MixVoxels represents a 4D dynamic scene as a mixture of static and dynamic voxels and processes them with different networks. In this way, the required quantities for the static voxels can be computed by a lightweight model, which substantially reduces the amount of computation, especially for the many everyday dynamic scenes dominated by a static background. To separate the two kinds of voxels, we propose a novel variation field to estimate the temporal variance of each voxel. For the dynamic voxels, we design an inner-product time query method to efficiently query multiple time steps, which is essential for recovering high-dynamic motions. As a result, with 15 minutes of training on 300-frame input videos of dynamic scenes, MixVoxels achieves better PSNR than previous methods. Code and trained models are available at https://github.com/fengres/mixvoxels
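The inner-product time query can be illustrated with a minimal sketch: each dynamic voxel stores a feature vector, and a bank of time embeddings lets many time steps be queried at once through a single matrix product. The shapes and names below are illustrative assumptions, not the paper's exact design.

```python
def time_query(voxel_feats, time_embeds):
    """Sketch of an inner-product time query.

    voxel_feats:  V x D list of per-voxel feature vectors
    time_embeds:  T x D list of time-step embeddings
    Returns a V x T table of responses: one inner product per
    (voxel, time step) pair, so T time steps are queried in one pass.
    """
    return [[sum(v * t for v, t in zip(vf, te)) for te in time_embeds]
            for vf in voxel_feats]

# Two voxels (D=2) queried at three time steps:
responses = time_query([[1, 0], [0, 1]], [[2, 3], [4, 5], [6, 7]])
```

The efficiency argument is that the per-voxel work is shared across all queried time steps, instead of running the network once per time step.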
The rise of simulation environments has enabled learning-based approaches to assembly planning, which is otherwise a labor-intensive and daunting task. Assembling furniture is especially interesting because furniture is complex and poses challenges for learning-based approaches. Surprisingly, humans can solve assembly from a 2D snapshot of the assembled product. Although recent years have witnessed promising learning-based approaches to furniture assembly, they assume the availability of correct connection labels for each assembly step, which are expensive to obtain in practice. In this paper, we relax this assumption and aim to solve furniture assembly with as little human expertise and supervision as possible. Specifically, we assume the availability of the assembled point cloud, and compare the point cloud of the current assembly against that of the target product to obtain a novel reward signal based on two measures: incorrectness and incompleteness. We show that our novel reward signal can train a deep network to successfully assemble different types of furniture. Code and networks available at: https://github.com/metu-kalfa/assemblerl
Dementia is a neuropsychiatric brain disorder that usually occurs when one or more brain cells stop working partially or entirely. Diagnosing this disorder in its early stages is a crucial task for saving patients from adverse consequences and providing them with better healthcare. Machine learning methods have proven accurate in predicting dementia in the early stages of the disease. The prediction of dementia depends heavily on the type of data collected, usually the normalized whole-brain volume (nWBV) and the Atlas Scaling Factor (ASF), which are commonly measured and corrected from magnetic resonance imaging (MRI) scans. Other biological features, such as age and gender, can also help in diagnosing dementia. Although many studies have used machine learning to predict dementia, we cannot draw conclusions about the stability of these methods, i.e., which of them are more accurate under different experimental conditions. This paper therefore investigates the stability of conclusions about the performance of machine learning algorithms for dementia prediction. To this end, a large number of experiments were conducted using 7 machine learning algorithms and two feature reduction techniques, namely Information Gain (IG) and Principal Component Analysis (PCA). To examine the stability of these algorithms, the IG feature selection threshold was varied from 20% to 100%, and the PCA dimensionality from 2 to 8, resulting in 7x9 + 7x7 = 112 experiments. In each experiment, various classification evaluation metrics were recorded. The results show that, among the seven algorithms, the Support Vector Machine and Naive Bayes are the most stable as the selection threshold changes. Moreover, using IG appears to be more effective than using PCA for predicting dementia.
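The experiment count stated above follows directly from the grid of settings: 7 classifiers paired with 9 IG thresholds (20% to 100% in steps of 10%) and, separately, with 7 PCA dimensionalities (2 to 8). A small sketch of that grid; only SVM and Naive Bayes are named in the abstract, so the other classifier names are placeholders.

```python
# Placeholder names except "SVM" and "NaiveBayes", which the paper singles out.
algorithms = ["SVM", "NaiveBayes", "Clf3", "Clf4", "Clf5", "Clf6", "Clf7"]
ig_thresholds = list(range(20, 101, 10))   # 20%, 30%, ..., 100% -> 9 settings
pca_dims = list(range(2, 9))               # 2, 3, ..., 8        -> 7 settings

experiments = [(a, "IG", t) for a in algorithms for t in ig_thresholds]
experiments += [(a, "PCA", d) for a in algorithms for d in pca_dims]
# len(experiments) == 7*9 + 7*7 == 112, matching the stated total.
```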
Logo retrieval is a challenging problem, as the definition of similarity is more subjective compared to image retrieval tasks, and sets of known similarities are very scarce. To tackle this challenge, in this paper we propose a simple but effective segment-based augmentation strategy that introduces artificially similar logos for training deep networks for logo retrieval. In this novel augmentation strategy, we first find the segments in a logo and then apply transformations such as rotation, scaling, and color change to the segments, unlike conventional image-level augmentation strategies. Moreover, we evaluate whether Smooth-AP, a recently introduced ranking-based loss function, is a better approach for learning similarity for logo retrieval. On the large-scale METU Trademark Dataset, we show that (i) our segment-based augmentation strategy improves retrieval performance compared to the baseline model and image-level augmentation strategies, and (ii) Smooth-AP indeed performs better than conventional losses for logo retrieval.
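The segment-level idea can be sketched schematically: rather than transforming the whole logo image at once, each segment independently receives its own random rotation, scale, and color change, producing an artificially "similar" logo for training. The parameter ranges and field names below are illustrative assumptions.

```python
import random

def augment_logo_segments(segments, seed=None):
    """Conceptual sketch of segment-based augmentation: sample an
    independent rotation, scale, and hue shift for each logo segment."""
    rng = random.Random(seed)  # seedable for reproducible augmentations
    augmented = []
    for seg in segments:
        augmented.append({
            "segment": seg,
            "rotation_deg": rng.uniform(-30.0, 30.0),
            "scale": rng.uniform(0.8, 1.2),
            "hue_shift": rng.uniform(-0.1, 0.1),
        })
    return augmented

# Two segments of one logo, each perturbed independently:
pair = augment_logo_segments(["seg_a", "seg_b"], seed=0)
```

Applying the sampled transforms to the actual segment pixels (and compositing the result) would require an image library and segmentation step not shown here.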
Drone detection has become an important object detection task as drone costs have decreased and drone technology has improved. However, it is difficult to detect distant drones when contrast is weak and visibility is poor at long range. In this work, we propose several sequence classification architectures to reduce the false-positive ratio of detected drone tracks. Moreover, we propose a new drone-vs-bird sequence classification dataset to train and evaluate the proposed architectures. 3D CNN, LSTM, and Transformer-based sequence classification architectures were trained on the proposed dataset to show the effectiveness of the proposed ideas. As the experiments show, using sequence information improves the bird classification and overall F1 scores by up to 73% and 35%, respectively. Among all sequence classification models, an R(2+1)D-based fully convolutional model yields the best transfer learning and fine-tuning results.
Sequential recommendation requires the recommender to capture evolving behavior characteristics from logged user behavior data in order to make accurate recommendations. However, a user behavior sequence resembles a script with multiple ongoing threads intertwined. We find that only a small subset of pivotal behaviors evolves into the user's future actions. As a result, a user's future behavior is hard to predict. We characterize this property of each user's sequential behaviors as a behavior pathway; different users have distinct behavior pathways. Among existing sequential models, Transformers have shown great capacity in capturing globally dependent characteristics. However, these models mainly use the self-attention mechanism to produce a dense distribution over all previous behaviors, which leaves the final prediction overwhelmed by trivial behaviors not tailored to each user. In this paper, we build the Recommender Transformer (RETR) with a novel pathway attention mechanism. RETR can dynamically plan the behavior pathway specific to each user and sparsely activate the network through this behavior pathway to effectively capture evolving patterns useful for recommendation. The key design is a learned binary pathway that prevents the behavior pathway from being overwhelmed by trivial behaviors. We empirically verify the effectiveness of RETR on seven real-world datasets, where it yields state-of-the-art performance.
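The contrast between dense self-attention and a binary pathway can be sketched as follows: given a (here, already learned) binary mask over the behavior history, masked-out behaviors receive exactly zero attention weight, so only the pathway behaviors influence the prediction. This is a schematic illustration, not the paper's architecture.

```python
import math

def pathway_attention(query, keys, pathway_mask):
    """Sketch of one pathway-attention step: a binary mask restricts the
    softmax to the behaviors on the user's pathway. Masked positions get
    a score of -inf, which softmax turns into exactly zero weight."""
    scores = []
    for k, m in zip(keys, pathway_mask):
        if m:
            scores.append(sum(q * x for q, x in zip(query, k)))
        else:
            scores.append(float("-inf"))
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]  # exp(-inf) == 0.0
    z = sum(exps)
    return [e / z for e in exps]

# The second behavior is off the pathway, so its weight is exactly 0:
weights = pathway_attention([1, 0], [[1, 0], [0, 1], [1, 1]], [1, 0, 1])
```

Standard dense self-attention is recovered when the mask is all ones; the learned binary pathway is what makes the activation sparse.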